Parallel and distributed computing has been the basis of many emerging areas, such as smart networks, cloud computing, big data analysis, and blockchain technology. Without the development of parallel and distributed computing technologies and the various types of systems built on them, it would not be possible to meet the requirements for efficiency, accuracy, scalability, and reliability of the critical applications that support our modern economy and society. While parallel and distributed computing has played a vital role in modern science, engineering, biology, medicine, pharmacy, astronomy, geology, and archaeology, its application has also extended to business, finance, economics, management, government, and defense, covering all aspects of modern society and life. Furthermore, parallel and distributed computing has underpinned recent advances in many active research directions, including artificial intelligence, machine learning, the Internet of Things, bioinformatics, digital medicine, cybersecurity, and social computing, resulting in numerous ground-breaking discoveries that are changing our society and life.

The purpose of this special issue on “Parallel and Distributed Computing, Applications and Technologies” is to present some recent developments that address challenging problems in the theory, technology, and modern applications of this rapidly evolving field. The content of this special issue covers topics in both core areas of parallel and distributed computing, including architectures and algorithms, and interdisciplinary areas in applications and technologies, ranging from machine learning to wireless sensor networks. It includes papers selected from extended versions of papers accepted at the 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT 2019), as well as new submissions to the open call for this special issue.
During the review process, every submitted paper was carefully reviewed by at least three reviewers. After evaluating the reviewers' scores and reports, we selected 11 papers for inclusion in this special issue. The following is a brief description of the selected papers.

In the paper by Zou et al.,1 barrier coverage for wireless sensor networks is addressed. A new model using sink-based mobile sensors is proposed to prolong the lifespan of the coverage. In this model, a given line barrier is covered by mobile sensors emitted from several distributed sink stations such that the maximum movement of the sensors is minimized, thereby prolonging the lifespan of the sensor node that would otherwise be exhausted first. Through theoretical and experimental analysis, the authors showed that the runtime is linear in the number of sinks, which reaches the optimal bound.

The paper by Kayesh et al.2 studies how to detect event causality from tweets. This is a challenging task because tweets are short, unstructured, and often written in highly informal language that lacks sufficient contextual information for detecting causality. The authors argue that none of the existing approaches tackles this lack of contextual information. They applied a context-word extension technique and a deep causal event detection model to address the issue. Their model showed improvements in recall and detection accuracy over existing approaches.

In the paper by Cao and Shen,3 the authors address the drawbacks of classifiers in dealing with imbalanced data. They proposed an improved clustering-based stratified sampling technique that improves the classification performance of support vector machines (SVMs) on imbalanced datasets, sampling the data differently according to the class type. They also extended their method to an ensemble classifier that uses multiple base SVM classifiers for prediction.
Through comparison, they demonstrated that their method is more effective on several imbalanced datasets than existing sampling methods.

In the paper by Quan et al.,4 the authors proposed a practical approach to error correction for quantum computing. As an important family of quantum error correction codes (QECCs), Reed–Muller quantum codes (RMQCs) have attracted much interest in the study of universal fault-tolerant gate sets. The authors investigated fault-tolerant logical Hadamard gates in RMQCs. At a reasonably higher cost, their method achieves a higher success rate for error correction. By using the logical Hadamard gate, the authors claim that a universal fault-tolerant gate set is achieved in a single RMQC.

In the paper by Zhou et al.,5 the inference speed of convolutional neural networks is studied. They proposed a pipelining strategy based on single-instruction-multiple-data (SIMD) instructions to optimize 3 × 3 convolution on ARM-based CPUs. After implementing this strategy in practice, inference speed is largely improved according to their profiling measurements. They also enabled multithreaded processing, reaching a speedup of 18.3 times over the unoptimized single-threaded version.

In the paper by Chen et al.,6 the authors studied the mobile recharging problem for wireless sensor networks. They proposed an adaptive real-time on-demand charging scheduling scheme. By using an adaptive charging mode based on the number of charging requests, their scheme gains efficiency because both the location-dependent charging cost and an energy-driven charging priority are considered. Their simulation results showed better performance in charging throughput, average charging latency, and charge scheduling time compared with existing approaches.

In the paper by Zhang et al.,7 the authors focused on a deep learning algorithm for the Chinese-language texts of criminal cases.
They applied text embedding when preprocessing the characters and words. A convolutional neural network (CNN) is used to extract features from the criminal case database, and their CNN-LSTM classification proved effective for Chinese characters.

In the paper by Farahabady et al.,8 the authors studied how to efficiently schedule shared resources while preserving the performance of virtual services in a virtualized computer system. They proposed a resource allocation controller that uses a fully polynomial-time randomized approximation scheme to enable performance isolation of concurrent I/O requests. This controller aims to minimize the total number of quality-of-service (QoS) violation incidents across the entire platform. Their work demonstrates that QoS violation incidents are reduced by 32% compared with the default resource allocation policy embedded in the existing Linux container layer.

In the paper by Guermouche and Orgerie,9 the authors discussed the limits of physical architectures and the increasing extent of dark silicon in the race for computing power. They studied the impact of vectorization and thermal design power (TDP) on runtime as well as on processor and DRAM power consumption, considering three architectures and five applications with different behaviors. They showed that although using SIMD instructions with larger register sizes improves performance and reduces overall energy consumption, it has a negative impact on both DRAM and processor power consumption. They also found that AVX-512 may behave differently, with lower power consumption than the other instruction sets despite providing better performance. Furthermore, they showed that Turbo Boost may lead to better performance, but at the cost of higher energy consumption, depending on the architecture. This work can be regarded as a valuable study over four generations of vectorization on recent hardware for representative HPC benchmarks.
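To make concrete the kind of kernel that SIMD pipelining (as in Zhou et al.5) and wider vector registers (as studied by Guermouche and Orgerie9) accelerate, the following is a minimal scalar sketch of a 3 × 3 "valid" convolution. It is an illustration written for this editorial, not code from either paper; the function name and the pure-Python form are our own.

```python
def conv3x3(img, kernel):
    """Scalar 3x3 'valid' convolution over a 2D list of floats.

    Illustrative baseline only: SIMD implementations replace the two
    inner loops with vector multiply-accumulate instructions that
    compute several output pixels per instruction.
    """
    h, w = len(img), len(img[0])
    # 'Valid' padding: the output shrinks to (h - 2) x (w - 2).
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += img[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# A 4x4 image of ones convolved with an all-ones kernel:
# each valid output position sums nine ones.
ones = [[1.0] * 4 for _ in range(4)]
box = [[1.0] * 3 for _ in range(3)]
print(conv3x3(ones, box))  # → [[9.0, 9.0], [9.0, 9.0]]
```

The nested loops above are exactly the portion that vectorized implementations restructure; the arithmetic itself is unchanged, which is why such optimizations affect speed and power draw but not the computed result.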
In the paper by Janssen et al.,10 the authors addressed how to use genetic algorithms to find near-optimal solutions to NP-hard problems. They exploited the parallel processing capability of graphics processing units through Nvidia's CUDA programming platform, and their method achieved significant computational speedups.

In the paper by Yao et al.,11 the authors proposed an efficient compression algorithm for large collections of FASTA genomes that can be used in distributed cloud computing systems. They proposed two optimization schemes based on the HRCM compression method, significantly improving the compression ratio. Their methods also showed improvements in robustness and scalability.

The selected papers of this special issue cover a variety of interesting topics, reflecting recent developments in theoretical and practical research in both core and interdisciplinary areas of parallel and distributed computing, applications, and technologies. With the ever-increasing scale and complexity of computation and data analytics, parallel computing plays a vital role in providing the required efficiency, accuracy, scalability, and reliability. We hope that this special issue provides valuable information and references for the pursuit of research in parallel and distributed computing and its related areas.

Finally, the guest editors would like to extend their sincere thanks to all authors for their valuable contributions to this special issue, and to all the reviewers for their time and effort in providing reviews.